Pre-training has shown success in different areas of machine learning, such as computer vision, natural language processing (NLP), and medical imaging. However, it has not been fully explored for clinical data analysis. Even though an immense amount of clinical records is recorded, data and labels can be scarce for data collected in small hospitals or dealing with rare diseases. In such scenarios, pre-training on larger sets of unlabelled clinical data can improve performance. In this paper, we propose novel unsupervised pre-training techniques designed for heterogeneous, multi-modal clinical data for patient outcome prediction, inspired by masked language modeling (MLM), by leveraging graph deep learning over population graphs. To this end, we further propose a graph-transformer-based network designed to handle heterogeneous clinical data. By combining masking-based pre-training with a transformer-based network, we translate the success of masking-based pre-training from other domains to heterogeneous clinical data. We show the benefit of our pre-training method in a self-supervised and a transfer learning setting, using three medical datasets: TADPOLE, MIMIC-III, and a sepsis prediction dataset. We find that our proposed pre-training method helps in modeling the data at both the patient and the population level and improves performance on different fine-tuning tasks across all datasets.
Pre-training has shown success in different areas of machine learning, such as computer vision (CV), natural language processing (NLP), and medical imaging. However, it has not been fully explored for clinical data analysis. Even though an immense amount of electronic health record (EHR) data is recorded, data and labels can be scarce if the data is collected in small hospitals or deals with rare diseases. In such scenarios, pre-training on a larger set of EHR data can improve model performance. In this paper, we apply unsupervised pre-training to heterogeneous, multi-modal EHR data for patient outcome prediction. To model this data, we leverage graph deep learning over population graphs. We first design a network architecture based on graph transformers, tailored to handle the various input feature types occurring in EHR data, such as continuous, discrete, and time-series features, allowing better multi-modal data fusion. Further, we design pre-training methods based on masked imputation to pre-train the network before fine-tuning it on different end tasks. Pre-training is done in a fully unsupervised fashion, which lays the groundwork for pre-training on large public datasets with different tasks and similar modalities in the future. We test our method on two medical datasets of patient records, TADPOLE and MIMIC-III, including imaging and non-imaging features and different prediction tasks. We find that our proposed graph-based pre-training method helps in modeling the data at a population level and further improves performance on the fine-tuning tasks in terms of average AUC on both MIMIC-III and TADPOLE, by 7.64% for the latter.
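As a rough illustration of the masked-imputation pre-training described above (not the authors' released code: the feature tokenization, the generic transformer encoder standing in for their graph transformer, the 15% mask ratio, and the MSE objective are all assumptions), a minimal PyTorch sketch might look like this:

```python
import torch
import torch.nn as nn

# Illustrative sketch only: BERT-style masked imputation on patient feature
# tokens. Mask a random subset of tokens and train the network to
# reconstruct the original values at the masked positions.
def masked_pretrain_step(encoder, decoder, x, mask_ratio=0.15):
    # x: (batch, num_feature_tokens, dim), one token per clinical feature
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    x_masked = x.clone()
    x_masked[mask] = 0.0                    # blank out the selected tokens
    recon = decoder(encoder(x_masked))      # reconstruct original values
    return ((recon - x)[mask] ** 2).mean()  # loss only on masked positions

# toy usage: a generic transformer encoder stands in for the paper's
# graph-transformer architecture
dim = 32
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2)
decoder = nn.Linear(dim, dim)
x = torch.randn(8, 20, dim)                 # 8 patients, 20 feature tokens each
loss = masked_pretrain_step(encoder, decoder, x)
loss.backward()
```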
Text-guided image editing can have a transformative impact in supporting creative applications. A key challenge is to generate edits that are faithful to input text prompts, while consistent with input images. We present Imagen Editor, a cascaded diffusion model built by fine-tuning Imagen on text-guided image inpainting. Imagen Editor's edits are faithful to the text prompts, which is accomplished by using object detectors to propose inpainting masks during training. In addition, Imagen Editor captures fine details in the input image by conditioning the cascaded pipeline on the original high resolution image. To improve qualitative and quantitative evaluation, we introduce EditBench, a systematic benchmark for text-guided image inpainting. EditBench evaluates inpainting edits on natural and generated images exploring objects, attributes, and scenes. Through extensive human evaluation on EditBench, we find that object-masking during training leads to across-the-board improvements in text-image alignment -- such that Imagen Editor is preferred over DALL-E 2 and Stable Diffusion -- and, as a cohort, these models are better at object-rendering than text-rendering, and handle material/color/size attributes better than count/shape attributes.
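A hedged sketch of the object-masking idea: here an off-the-shelf torchvision detector and a simple take-the-top-box policy are stand-ins, since the abstract does not specify Imagen Editor's actual detector or mask policy.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Illustrative only: a generic pretrained detector proposes an object
# region that becomes the inpainting mask during training.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()

@torch.no_grad()
def propose_object_mask(image):
    # image: (3, H, W) float tensor with values in [0, 1]
    boxes = detector([image])[0]["boxes"]          # sorted by score
    mask = torch.zeros(image.shape[1:], dtype=torch.bool)
    if len(boxes) > 0:
        x0, y0, x1, y1 = boxes[0].round().long().tolist()
        mask[y0:y1, x0:x1] = True                  # region to be inpainted
    return mask
```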
Continual Learning, also known as Lifelong or Incremental Learning, has recently gained renewed interest among the Artificial Intelligence research community. Recent research efforts have quickly led to the design of novel algorithms able to reduce the impact of the catastrophic forgetting phenomenon in deep neural networks. Due to this surge of interest in the field, many competitions have been held in recent years, as they are an excellent opportunity to stimulate research in promising directions. This paper summarizes the ideas, design choices, rules, and results of the challenge held at the 3rd Continual Learning in Computer Vision (CLVision) Workshop at CVPR 2022. The focus of this competition is the complex continual object detection task, which is still underexplored in the literature compared to classification tasks. The challenge is based on the challenge version of the novel EgoObjects dataset, a large-scale egocentric object dataset explicitly designed to benchmark continual learning algorithms for egocentric category-/instance-level object understanding, which covers more than 1k unique main objects and 250+ categories in around 100k video frames.
As machine translation (MT) metrics improve their correlation with human judgement every year, it is crucial to understand the limitations of such metrics at the segment level. Specifically, it is important to investigate metric behaviour when facing accuracy errors in MT because these can have dangerous consequences in certain contexts (e.g., legal, medical). We curate ACES, a translation accuracy challenge set, consisting of 68 phenomena ranging from simple perturbations at the word/character level to more complex errors based on discourse and real-world knowledge. We use ACES to evaluate a wide range of MT metrics including the submissions to the WMT 2022 metrics shared task and perform several analyses leading to general recommendations for metric developers. We recommend: a) combining metrics with different strengths, b) developing metrics that give more weight to the source and less to surface-level overlap with the reference and c) explicitly modelling additional language-specific information beyond what is available via multilingual embeddings.
We develop a simple framework to learn bio-inspired foraging policies using human data. We conduct an experiment where humans are virtually immersed in an open field foraging environment and are trained to collect the highest amount of rewards. A Markov Decision Process (MDP) framework is introduced to model the human decision dynamics. Then, Imitation Learning (IL) based on maximum likelihood estimation is used to train Neural Networks (NN) that map human decisions to observed states. The results show that passive imitation substantially underperforms humans. We further refine the human-inspired policies via Reinforcement Learning (RL) using the on-policy Proximal Policy Optimization (PPO) algorithm which shows better stability than other algorithms and can steadily improve the policies pretrained with IL. We show that the combination of IL and RL can match human results and that good performance strongly depends on combining the allocentric information with an egocentric representation of the environment.
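A minimal sketch of the maximum-likelihood imitation step (behavioural cloning) described above; the network size, optimiser, and synthetic data are illustrative assumptions, and the subsequent PPO refinement stage is omitted.

```python
import torch
import torch.nn as nn

# Illustrative behavioural cloning: maximum-likelihood fit of a policy to
# (state, human action) pairs via cross-entropy on the observed decisions.
# Dimensions and data are toy stand-ins for the foraging experiment.
obs_dim, n_actions = 8, 4
policy = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(),
                       nn.Linear(64, n_actions))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

states = torch.randn(256, obs_dim)             # toy observed states
actions = torch.randint(0, n_actions, (256,))  # toy human decisions
for _ in range(100):
    loss = nn.functional.cross_entropy(policy(states), actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
# The pretrained `policy` would then be refined with on-policy PPO.
```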
Neural metrics achieve impressive correlations with human judgements in machine translation evaluation, but before we can safely optimise towards such metrics, we should be aware of (and, ideally, eliminate) biases towards bad translations that receive high scores. Our experiments show that sample-based Minimum Bayes Risk (MBR) decoding can be used to explore and quantify such weaknesses. When applying this strategy to COMET for en-de and de-en, we find that COMET models are not sensitive enough to discrepancies in numbers and named entities. We further show that these biases are hard to fully remove by simply training on additional synthetic data, and we release our code and data to facilitate further experiments.
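A minimal sketch of sample-based MBR decoding, the probing strategy the abstract refers to: among sampled translations, pick the one with the highest average utility against the other samples, which act as a Monte Carlo proxy for the unknown reference. The character-bigram utility below is a toy stand-in for COMET, which the paper actually uses.

```python
# Illustrative sample-based MBR decoding over a set of sampled translations.
def mbr_decode(samples, utility):
    def expected_utility(cand):
        others = [s for s in samples if s is not cand]
        return sum(utility(cand, ref) for ref in others) / max(len(others), 1)
    return max(samples, key=expected_utility)

# toy utility (character-bigram Jaccard overlap) standing in for COMET
def bigram_overlap(hyp, ref):
    h = {hyp[i:i + 2] for i in range(len(hyp) - 1)}
    r = {ref[i:i + 2] for i in range(len(ref) - 1)}
    return len(h & r) / max(len(h | r), 1)

best = mbr_decode(["the cat sat", "a cat sat", "the dog ran"], bigram_overlap)
```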
Dilated convolution is basically a convolution with a wider kernel created by regularly inserting spaces between the kernel elements. In this paper, we present a new version of the dilated convolution in which the spacings are made learnable via backpropagation through an interpolation technique. We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and generalize its approach to the n-dimensional convolution case. Our main focus here, however, is the 2D case, for which we developed two implementations: a naive one that constructs the dilated kernel, suitable for small dilation rates, and a more time/memory-efficient one that uses a modified version of the "im2col" algorithm. We then show how this technique improves the accuracy of existing architectures on the Pascal VOC 2012 dataset by simply replacing the classical dilated convolution layers with DCLS ones. Furthermore, we show that DCLS allows reducing the number of learnable parameters of the depthwise convolutions used in the recent ConvMixer architecture by a factor of 3, with no or very low loss of accuracy, by replacing their large dense kernels with sparse DCLS ones. The code of the method, based on PyTorch, is available at: https://github.com/K-H-Ismail/Dilated-Convolution-with-Learnable-Spacings-PyTorch.
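A hypothetical sketch of the "naive" DCLS construction: each kernel weight carries a learnable real-valued 2D position inside a larger kernel and is scattered onto the integer grid by bilinear interpolation, so gradients flow to the positions as well. The linked repository is the reference implementation; the initialisation, kernel sizes, and padding choices below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative naive DCLS: build a sparse "dilated" kernel from weights with
# learnable positions, then run a standard convolution with it.
class NaiveDCLS2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_count=9, dilated_size=7):
        super().__init__()
        self.S = dilated_size
        self.weight = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, kernel_count))
        # real-valued positions, initialised uniformly inside the kernel
        self.pos = nn.Parameter(torch.rand(2, kernel_count) * (dilated_size - 1))

    def build_kernel(self):
        out_ch, in_ch, K = self.weight.shape
        p = self.pos.clamp(0, self.S - 1 - 1e-4)
        p0 = p.floor()
        f = p - p0                           # bilinear fractions carry the gradient
        x0, y0 = p0[0].long(), p0[1].long()
        fx, fy = f[0], f[1]
        flat = self.weight.new_zeros(out_ch, in_ch, self.S * self.S)
        # distribute each weight over its four integer neighbours
        for dx, dy, w in ((0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                          (0, 1, (1 - fx) * fy), (1, 1, fx * fy)):
            idx = (y0 + dy) * self.S + (x0 + dx)
            flat = flat.index_add(2, idx, self.weight * w)
        return flat.view(out_ch, in_ch, self.S, self.S)

    def forward(self, x):
        return F.conv2d(x, self.build_kernel(), padding=self.S // 2)

layer = NaiveDCLS2d(3, 16)
out = layer(torch.randn(1, 3, 32, 32))       # -> (1, 16, 32, 32)
```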
The ability of a model to learn continually can be empirically assessed in different continual learning scenarios. Each scenario defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature of experimenting mainly on class-incremental scenarios, in which classes present in one experience are never revisited. We posit that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting at the expense of other important objectives such as forward transfer and computational efficiency. In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and contributes to softening the disruption of previous knowledge. We advocate for a more in-depth study of alternative continual learning scenarios, in which repetition is integrated by design into the stream of incoming information. Starting from already existing proposals, we describe the advantages that such class-incremental-with-repetition scenarios could offer for a more comprehensive assessment of continual learning models.